IETE Journal of Research ; 2023.
Article in English | Scopus | ID: covidwho-2269564

ABSTRACT

Task scheduling scenarios require system designers to have complete information about the resources and their capabilities, along with the tasks and their application-specific requirements. An effective task-to-resource mapping strategy maximizes resource utilization under constraints while minimizing task waiting time, which in turn maximizes task execution efficiency. In this work, a two-level reinforcement learning algorithm for task scheduling is proposed. The algorithm uses a deep-intensive learning stage to generate a deployable strategy for task-to-resource mapping. This mapping is re-evaluated at specific execution breakpoints, and the strategy is revised based on incremental learning from these breakpoints. To perform incremental learning, real-time parametric checking is done on the resources and the tasks, and a new strategy is devised during execution. The mean task waiting time is reduced by 20% compared with standard algorithms such as Dynamic and Integrated Resource Scheduling, Improved Differential Evolution, and Q-learning-based Improved Differential Evolution, while resource utilization is improved by more than 15%. The algorithm is evaluated on datasets from different domains, including public-domain Coronavirus disease (COVID-19) datasets and National Aeronautics and Space Administration (NASA) datasets, and it performs consistently on all of them. © 2023 IETE.
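The abstract does not include implementation details, but the core idea it describes — learning a task-to-resource mapping and re-deriving the strategy at execution breakpoints — can be sketched with a toy tabular Q-learning scheduler. Everything below is a hypothetical illustration, not the authors' method: the reward function, the task/resource representation (single CPU-demand and capacity numbers), and all parameter values are assumptions made for the sake of a runnable example.

```python
import random

def schedule_with_breakpoints(tasks, resources, episodes=200, breakpoint_every=50,
                              alpha=0.5, epsilon=0.2, seed=0):
    """Toy scheduler: a Q-table learns the value of assigning each task to
    each resource; the greedy mapping is re-derived at fixed breakpoints."""
    rng = random.Random(seed)
    n_tasks, n_res = len(tasks), len(resources)
    # Q[t][r]: learned value of placing task t on resource r.
    Q = [[0.0] * n_res for _ in range(n_tasks)]

    def reward(t, r):
        # Stand-in for utilization: best when capacity matches demand,
        # heavily penalized when the resource would be overloaded.
        cap, demand = resources[r], tasks[t]
        return -abs(cap - demand) if cap >= demand else -10.0

    for episode in range(episodes):
        for t in range(n_tasks):
            # epsilon-greedy choice of a resource for this task
            if rng.random() < epsilon:
                r = rng.randrange(n_res)
            else:
                r = max(range(n_res), key=lambda j: Q[t][j])
            # one-step Q-update (stateless bandit-style formulation)
            Q[t][r] += alpha * (reward(t, r) - Q[t][r])
        if (episode + 1) % breakpoint_every == 0:
            # "breakpoint": a real system would fold in fresh runtime
            # measurements of resources and tasks here before continuing.
            pass

    # final deployable strategy: greedy mapping from the learned Q-table
    return [max(range(n_res), key=lambda j: Q[t][j]) for t in range(n_tasks)]

# Hypothetical workload: per-task CPU demand and per-node CPU capacity.
mapping = schedule_with_breakpoints(tasks=[2, 4, 8], resources=[4, 8])
print(mapping)  # task 8 lands on the 8-capacity node to avoid overload
```

In this toy run, the learned mapping places the two small tasks on the smaller node and the large task on the larger one, mirroring the paper's stated goal of maximizing utilization while avoiding placements that would stall tasks.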
